Amazon SageMaker Clarify
Learn How Amazon SageMaker Clarify Helps Detect Bias
Bias detection in data and model outcomes is a fundamental requirement for building responsible artificial intelligence (AI) and machine learning (ML) models. Unfortunately, detecting bias isn't an easy task for the vast majority of practitioners, given the large number of ways it can be measured and the different factors that can contribute to a biased outcome. For instance, an imbalanced sampling of the training data may result in a model that is less accurate for certain subsets of the data. Bias may also be introduced by the ML algorithm itself: even with a well-balanced training dataset, the outcomes might favor certain subsets of the data compared to others. To detect bias, you must have a thorough understanding of the different types of bias and the corresponding bias metrics. For example, at the time of this writing, Amazon SageMaker Clarify offers 21 different metrics to choose from.
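To make one of those metrics concrete, here is a minimal, self-contained Python sketch of class imbalance (CI), one of Clarify's documented pre-training metrics; the `gender` column and the toy data below are invented purely for illustration:

```python
import pandas as pd

def class_imbalance(df: pd.DataFrame, facet: str, advantaged_value) -> float:
    """Class Imbalance (CI) = (n_a - n_d) / (n_a + n_d), where n_a and n_d
    are the row counts of the advantaged and disadvantaged facet groups.
    Ranges from -1 to 1; values near 0 indicate a balanced dataset."""
    n_a = (df[facet] == advantaged_value).sum()
    n_d = (df[facet] != advantaged_value).sum()
    return (n_a - n_d) / (n_a + n_d)

# Toy training set where one group is heavily over-represented.
df = pd.DataFrame({"gender": ["m"] * 800 + ["f"] * 200})
print(class_imbalance(df, facet="gender", advantaged_value="m"))  # 0.6
```

A CI of 0.6 here signals a strongly imbalanced sample, the kind of issue the article warns can make a model less accurate for under-represented subsets.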
Review: AWS AI and Machine Learning stacks up
Amazon Web Services claims to have the broadest and most complete set of machine learning capabilities. I honestly don't know how the company can claim those superlatives with a straight face: yes, the AWS machine learning offerings are broad, fairly complete and rather impressive, but so are those of Google Cloud and Microsoft Azure. Amazon SageMaker Clarify is the new add-on to the Amazon SageMaker machine learning ecosystem for Responsible AI. SageMaker Clarify integrates with SageMaker at three points: in the new Data Wrangler, to detect data biases at import time, such as imbalanced classes in the training set; in the Experiments tab of SageMaker Studio, to detect biases in the model after training and to explain the importance of features; and in SageMaker Model Monitor, to detect bias shifts in a deployed model over time. Historically, AWS has presented its services as cloud-only.
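These integration points are also exposed programmatically through the `clarify` module of the SageMaker Python SDK. As a rough sketch, assuming a CSV training set in S3 (the IAM role, bucket paths, and column names below are placeholders, not values from the review), a pre-training bias report can be launched like this:

```python
from sagemaker import Session, clarify

session = Session()

# A processing job that runs Clarify's bias analysis on a dataset in S3.
clarify_processor = clarify.SageMakerClarifyProcessor(
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    instance_count=1,
    instance_type="ml.m5.xlarge",
    sagemaker_session=session,
)

# Where the training data lives and where the bias report should be written.
data_config = clarify.DataConfig(
    s3_data_input_path="s3://my-bucket/train.csv",   # placeholder path
    s3_output_path="s3://my-bucket/clarify-report",  # placeholder path
    label="approved",
    headers=["approved", "gender", "age", "income"],
    dataset_type="text/csv",
)

# The facet (sensitive attribute) to check, and which label value is "positive".
bias_config = clarify.BiasConfig(
    label_values_or_threshold=[1],
    facet_name="gender",
)

clarify_processor.run_pre_training_bias(
    data_config=data_config,
    data_bias_config=bias_config,
    methods=["CI", "DPL"],  # class imbalance, difference in proportions of labels
)
```

The same processor object can later run post-training and explainability analyses, which is how Clarify covers the three lifecycle points the review describes.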
Recap of AWS re:Invent 2020
This year the annual re:Invent conference organized by AWS was virtual, free and three weeks long. During multiple keynotes and sessions, AWS announced new features, improvements and cloud services. Below is a review of the main announcements impacting compute, database, storage, networking, machine learning and development. On the very first day of the conference, Amazon announced EC2 Mac instances for macOS, adding a new operating system to EC2 after many years. This is mainly targeted at workloads that only run on macOS, like building and testing applications for iOS, macOS, tvOS and Safari.
Amazon Web Services launches new tool to detect bias and blind spots in machine learning
A new feature from Amazon Web Services will alert developers to potential bias in machine learning algorithms, part of a larger effort by the tech industry to keep automated predictions from discriminating against women, people of color and other underrepresented groups. The feature, SageMaker Clarify, was announced at the AWS re:Invent conference Tuesday as a new component of the AWS SageMaker machine learning platform. The technology analyzes the data used to train machine learning models for telltale signs of bias, including data sets that don't accurately reflect the larger population. It also analyzes the machine learning model itself to help ensure the accuracy of the resulting predictions. A 2018 MIT study found that the presence of a disproportionate number of white males in data sets used to train facial recognition algorithms led to a larger number of errors in recognizing women and people of color.
Arthur.ai snags $15M Series A to grow machine learning monitoring tool – TechCrunch
At a time when more companies are building machine learning models, Arthur.ai offers a tool for monitoring those models in production. As demand for this type of tool has increased this year, in spite of the pandemic, the startup announced a $15 million Series A today. The investment was led by Index Ventures with help from newcomers Acrew and Plexo Capital, along with previous investors Homebrew, AME Ventures and Work-Bench. The round comes almost exactly a year after its $3.3 million seed round. As CEO and co-founder Adam Wenchel explains, data scientists build and test machine learning models in the lab under ideal conditions, but once these models are put into production against real-world data, their performance can begin to deteriorate.
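One common way to catch that kind of deterioration, shown below purely as a generic illustration rather than Arthur.ai's actual method, is to compare the distribution of a feature in production against the distribution it had at training time, for example with a two-sample Kolmogorov-Smirnov test:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)

# Feature values seen at training time vs. values arriving in production.
train_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)
live_feature = rng.normal(loc=0.4, scale=1.0, size=5_000)  # shifted distribution

# A small p-value means the two samples are unlikely to share a distribution.
stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"Drift detected (KS statistic={stat:.3f}, p={p_value:.2e})")
```

Monitoring tools typically run checks like this continuously and alert when the statistic crosses a threshold, rather than waiting for model accuracy to visibly drop.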
AWS expands on SageMaker capabilities with end-to-end features for machine learning – TechCrunch
Nearly three years after it was first launched, Amazon Web Services' SageMaker platform has gotten a significant upgrade: new features that make it easier for developers to automate and scale each step of building machine learning capabilities, the company said. As machine learning moves into the mainstream, business units across organizations will find applications for automation, and AWS is trying to make the development of those bespoke applications easier for its customers. "One of the best parts of having such a widely-adopted service like SageMaker is that we get lots of customer suggestions which fuel our next set of deliverables," said AWS vice president of machine learning, Swami Sivasubramanian. "Today, we are announcing a set of tools for Amazon SageMaker that makes it much easier for developers to build end-to-end machine learning pipelines to prepare, build, train, explain, inspect, monitor, debug and run custom machine learning models with greater visibility, explainability, and automation at scale." Already companies like 3M, ADP, AstraZeneca, Avis, Bayer, Capital One, Cerner, Domino's Pizza, Fidelity Investments, Lenovo, Lyft, T-Mobile, and Thomson Reuters are using SageMaker tools in their own operations, according to AWS.
AWS announces SageMaker Clarify to help reduce bias in machine learning models – TechCrunch
As companies rely increasingly on machine learning models to run their businesses, it's imperative to include anti-bias measures to ensure these models are not making false or misleading assumptions. Today at AWS re:Invent, AWS introduced Amazon SageMaker Clarify to help reduce bias in machine learning models. "We are launching Amazon SageMaker Clarify. And what that does is it allows you to have insight into your data and models throughout your machine learning lifecycle," Bratin Saha, Amazon VP and general manager of machine learning, told TechCrunch. He says that it is designed to analyze the data for bias before you start data prep, so you can find these kinds of problems before you even start building your model.
New – Amazon SageMaker Clarify Detects Bias and Increases the Transparency of Machine Learning Models
Today, I'm extremely happy to announce Amazon SageMaker Clarify, a new capability of Amazon SageMaker that helps customers detect bias in machine learning (ML) models, and increase transparency by helping explain model behavior to stakeholders and customers. As ML models are built by training algorithms that learn statistical patterns present in datasets, several questions immediately come to mind. First, can we ever hope to explain why our ML model comes up with a particular prediction? Second, what if our dataset doesn't faithfully describe the real-life problem we were trying to model? Could we even detect such issues?
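On the explainability side, Clarify attributes each prediction to individual input features using SHAP values. As a hedged sketch that reuses the `clarify_processor` and `data_config` objects from the earlier example (the baseline record and model name below are placeholders, not values from the announcement):

```python
from sagemaker import clarify

# Clarify's explainability uses Kernel SHAP; the baseline is a reference
# record against which each feature's contribution is measured.
shap_config = clarify.SHAPConfig(
    baseline=[[0, 35, 50_000]],  # placeholder reference record
    num_samples=100,
    agg_method="mean_abs",  # aggregate per-row values into global importances
)

# The trained model whose predictions are to be explained; Clarify spins up
# a temporary endpoint for it during the analysis.
model_config = clarify.ModelConfig(
    model_name="my-model",  # placeholder SageMaker model name
    instance_type="ml.m5.xlarge",
    instance_count=1,
    accept_type="text/csv",
)

clarify_processor.run_explainability(
    data_config=data_config,
    model_config=model_config,
    explainability_config=shap_config,
)
```

The resulting report, written to the configured S3 output path, ranks features by their average contribution to the model's predictions, which is the kind of stakeholder-facing explanation the announcement describes.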